


Appendix for "Disentangled Wasserstein Autoencoder for Protein Engineering"

1 Data preparation
1.1 Combination of data sources

Neural Information Processing Systems

We repeat this process until the size of the negative set is 5x that of the positive set. The expanded dataset is then provided to the respective ERGO model; any unobserved pair is treated as negative. Performance is shown in Table S2, along with TCRs that have more than one positive prediction or at least one wrong prediction.
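The 5x negative-sampling procedure described above can be sketched as follows. This is a minimal illustration: the pair representation and the `sample_negatives` helper are hypothetical stand-ins, not the paper's actual data pipeline.

```python
import random

def sample_negatives(positive_pairs, tcrs, peptides, ratio=5, seed=0):
    """Randomly pair TCRs and peptides; any pair not observed as positive
    is treated as negative. Repeat until the negative set is `ratio`
    times the size of the positive set."""
    rng = random.Random(seed)
    positives = set(positive_pairs)
    negatives = set()
    target = ratio * len(positives)
    while len(negatives) < target:
        pair = (rng.choice(tcrs), rng.choice(peptides))
        if pair not in positives:  # unobserved pairs count as negative
            negatives.add(pair)
    return negatives

# toy usage with made-up identifiers
pos = [("TCR1", "pepA"), ("TCR2", "pepB")]
neg = sample_negatives(pos, ["TCR1", "TCR2", "TCR3"],
                       ["pepA", "pepB", "pepC", "pepD"])
```

Using a set for the negatives guarantees that duplicate random draws do not inflate the count toward the 5x target.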


OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding - Supplementary Material

1 More Examples of Multi-Modal 3D Shape Retrieval


We leverage the metadata from the four datasets to generate the raw texts. Objaverse: we utilize the name associated with each shape to serve as the text. In this way, we generate one or more raw texts for each shape. To filter out uninformative texts, we prompt a language model with: "I am analyzing a 3D dataset with various text descriptions for the 3D models. If a text contains a clear noun (or noun phrase) that could potentially describe a 3D object, please respond with 'Y'."
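The noun-based filtering step can be sketched as below. The `query_llm` callable is a hypothetical stand-in for whatever language-model API was actually used; only the quoted prompt comes from the text above.

```python
FILTER_PROMPT = (
    'I am analyzing a 3D dataset with various text descriptions for the '
    '3D models. If a text contains a clear noun (or noun phrase) that '
    'could potentially describe a 3D object, please respond with "Y".'
)

def keep_text(raw_text, query_llm):
    """Keep a raw text only if the model answers "Y" for it.
    `query_llm` is a hypothetical callable: (prompt, text) -> answer."""
    answer = query_llm(FILTER_PROMPT, raw_text)
    return answer.strip().upper().startswith("Y")

def filter_texts(raw_texts, query_llm):
    """Apply the yes/no filter to every candidate raw text."""
    return [t for t in raw_texts if keep_text(t, query_llm)]

# toy stand-in model: answers "Y" only when the text names a known object
toy_model = lambda prompt, text: "Y" if "chair" in text else "N"
kept = filter_texts(["a wooden chair", "asset_0042"], toy_model)
```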


Masked Image Modeling Supplementary Material

1 More Training Details


We use the same settings for RevCol models of different sizes in MIM pre-training. The hyper-parameters generally follow [4, 2]. Table 3 shows the detailed training settings after MIM pre-training. We also show the training settings on ImageNet-1K after ImageNet-22K fine-tuning. For semantic segmentation, we evaluate the different backbones on the ADE20K dataset.
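Per-model-size training settings of this kind are often recorded as a small configuration mapping, as in the illustration-only sketch below. Every model name and value here is a hypothetical placeholder, not taken from the paper's tables.

```python
# Hypothetical fine-tuning settings keyed by model size; the keys and
# values are placeholders for illustration, not the paper's Table 3.
finetune_settings = {
    "revcol-small": {"epochs": 100, "base_lr": 5e-4, "weight_decay": 0.05},
    "revcol-base":  {"epochs": 100, "base_lr": 4e-4, "weight_decay": 0.05},
}

def setting(model, key):
    """Look up one hyper-parameter for a given model size."""
    return finetune_settings[model][key]
```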



1 Additional Results


We use the two highest-frequency ones, which results in 776 label categories. The learning rate is decreased by a factor of 10 at the end of the 10th and 20th epochs. The networks are trained for 36 epochs. Since not all of the labels for the test images are annotated, we only evaluate the performance of our model on the set of annotated labels. Hence, a false positive can occur only if a positively annotated label is predicted as a negative class.
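The step schedule described above (divide the learning rate by 10 after epochs 10 and 20, over 36 training epochs) can be sketched as a plain function; the base learning rate of 0.01 is a hypothetical value, since the text does not state it.

```python
def lr_at_epoch(epoch, base_lr=0.01, milestones=(10, 20), factor=0.1):
    """Step schedule: multiply the learning rate by `factor` after each
    milestone epoch has finished (epochs are 1-indexed)."""
    lr = base_lr
    for m in milestones:
        if epoch > m:
            lr *= factor
    return lr

# learning rate for each of the 36 training epochs
schedule = [lr_at_epoch(e) for e in range(1, 37)]
```

The same schedule is what a framework-level helper such as a multi-step LR scheduler would produce with milestones (10, 20) and a decay factor of 0.1.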


Supplementary Material for SegRefiner: Towards Model-Agnostic Segmentation Refinement with Discrete Diffusion Process

1 Implementation Details


The overall workflows of the training and inference processes are provided in Alg. 1 and Alg. 2, respectively. Model Architecture. Following [9], we use a U-Net with a 4-channel input and a 1-channel output. Both the input and output resolutions are set to 256×256. Training Settings. All experiments are conducted on 8 NVIDIA RTX 3090 GPUs with PyTorch. After a complete reverse diffusion process, the output is resized to the original size. We apply Non-Maximum Suppression (NMS, with a threshold of 0.3) to these patches to remove overlapping predictions. Our SegRefiner can robustly correct prediction errors both outside and inside the coarse mask.
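The NMS step over refined patches can be sketched with a standard greedy IoU-based implementation. The (x1, y1, x2, y2) box format and the example scores are assumptions for illustration; only the 0.3 threshold comes from the text.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.3):
    """Greedy NMS: keep the highest-scoring box, then drop any remaining
    box whose IoU with an already-kept box exceeds `threshold`."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

# two heavily overlapping patches and one disjoint patch (toy values)
patches = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
kept = nms(patches, [0.9, 0.8, 0.7])
```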


